Current Issue: January–March | Volume: 2025 | Issue Number: 1 | Articles: 5
Brain–computer interfaces (BCIs) enable users to control devices through their brain activity. Motor imagery (MI), the neural activity produced when an individual imagines performing a movement, is a common control paradigm. This study introduces a user-centric evaluation protocol for assessing the performance and user experience of an MI-based BCI control system that utilizes augmented reality. Augmented reality is employed to enhance user interaction by displaying environment-aware actions and guiding users on the imagined movements required for specific device commands. One of the major gaps in existing research is the lack of comprehensive evaluation methodologies, particularly under real-world conditions. To address this gap, our protocol combines quantitative and qualitative assessments across three phases. In the first phase, the BCI prototype's technical robustness is validated. The second phase involves a performance assessment of the control system. The third phase introduces a comparative analysis between the prototype and an alternative approach, incorporating detailed user experience evaluations through questionnaires and comparisons with non-BCI control methods. Participants engage in various tasks, such as sorting objects, picking and placing, and playing a board game using the BCI control system. The evaluation procedure is designed for versatility, with intended applicability beyond the specific use case presented; its adaptability enables easy customization to the specific user requirements of the BCI control application under investigation. This user-centric evaluation protocol offers a comprehensive framework for iterative improvements to the BCI prototype, ensuring technical validation, performance assessment, and user experience evaluation in a systematic and user-focused manner.
Brain–computer interfaces (BCIs) enable direct communication between the brain and external devices using electroencephalography (EEG) signals. BCIs based on code-modulated visual evoked potentials (cVEPs) rely on visual stimuli, so appropriate visual feedback on the interface is crucial for an effective BCI system. Many previous studies have demonstrated that implementing visual feedback can improve the information transfer rate (ITR) and reduce fatigue. This research compares a dynamic interface, in which target boxes change size according to detection certainty, with a threshold-bar interface in a three-step cVEP speller. We found that both interfaces perform well, with slight variations in accuracy, ITR, and output characters per minute (OCM). Notably, some participants showed significant performance improvements with the dynamic interface and found it less distracting than the threshold bars. These results suggest that, while average performance metrics are similar, the dynamic interface can provide significant benefits for certain users. This study underscores the potential of personalized interface choices to enhance BCI user experience and performance. By improving user-friendliness and performance while reducing distraction, dynamic visual feedback could optimize BCI technology for a broader range of users.
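The information transfer rate (ITR) used as a performance metric in this abstract is conventionally computed with Wolpaw's formula from the number of selectable targets, the selection accuracy, and the time per selection. A minimal sketch (the function name and parameters are illustrative, not taken from the paper):

```python
import math


def wolpaw_itr(n_targets: int, accuracy: float, trial_seconds: float) -> float:
    """Information transfer rate in bits/min via Wolpaw's formula.

    n_targets: number of selectable targets N
    accuracy: selection accuracy P in (0, 1]
    trial_seconds: time needed for one selection
    """
    n, p = n_targets, accuracy
    if p >= 1.0:
        bits_per_selection = math.log2(n)  # perfect accuracy: log2(N) bits
    else:
        bits_per_selection = (
            math.log2(n)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1))
        )
    return bits_per_selection * 60.0 / trial_seconds


# e.g. a 4-target speller at 100% accuracy, 6 s per selection -> 20 bits/min
rate = wolpaw_itr(4, 1.0, 6.0)
```

Raising accuracy, adding targets, or shortening the selection time all increase the ITR, which is why faster-converging feedback interfaces are attractive.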
Functional electrical stimulation (FES) can support functional restoration of a paretic limb post-stroke. Hebbian plasticity depends on temporally coinciding pre- and post-synaptic activity. A tight temporal relationship between motor cortical (MC) activity associated with attempted movement and FES-generated visuo-proprioceptive feedback is hypothesized to enhance motor recovery. Triggering FES delivery with a brain–computer interface (BCI) that detects movement attempts by classifying MC spectral power in electroencephalographic (EEG) signals has improved motor outcomes in chronic stroke patients. We hypothesized that heightened neural plasticity earlier post-stroke would further enhance corticomuscular functional connectivity and motor recovery. We compared patients with subcortical nondominant-hemisphere stroke assigned to BCI-FES and Random-FES (FES temporally independent of MC movement-attempt detection) groups. The primary outcome measure was the Fugl-Meyer Assessment, Upper Extremity (FMA-UE). We recorded high-density EEG and transcranial magnetic stimulation-induced motor evoked potentials before and after treatment. The BCI group showed greater: FMA-UE improvement; motor evoked potential amplitude; reduction of beta oscillatory power and long-range temporal correlation over contralateral MC; and corticomuscular coherence with contralateral MC. These changes are consistent with enhanced post-stroke motor improvement when movement is synchronized with MC activity reflecting attempted movement.
Depictions of robots as romantic partners for humans are frequent in popular culture. As robots become part of human society, they will gradually assume the role of partners for humans whenever necessary, as assistants, collaborators, or companions. Companion robots are supposed to provide social contact to those who would otherwise not have it. Yet these companion robots are usually not designed to fulfill one of the most important human needs: the need for romantic and intimate contact. Human–robot intimacy remains a vastly unexplored territory. In this article, we review the state-of-the-art research in intimate robotics. We discuss major issues limiting the acceptance of robots as intimate partners, the public perception of robots in intimate roles, and the possible influence of cross-cultural differences in these domains. We also discuss the possible negative effects human–robot intimacy may have on human–human contact. Most importantly, we propose the new term "intimate companion robots" to reduce the negative connotations of the terms used so far and to improve the social perception of research in this domain. With this article, we provide an outlook on prospects for the development of intimate companion robots, considering the specific context of their use.
Gesture recognition is crucial in computer vision-based applications such as drone control, gaming, virtual and augmented reality (VR/AR), and security, especially in human–computer interaction (HCI)-based systems. Gesture recognition systems are of two types, static and dynamic; our focus in this paper is on dynamic gesture recognition. In dynamic hand gesture recognition systems, the sequences of frames, i.e., temporal data, pose significant processing challenges and reduce efficiency compared to static gestures. The data become multi-dimensional compared to static images because both spatial and temporal information must be processed, which demands complex deep learning (DL) models with increased computational costs. This article presents a novel triple-layer algorithm that efficiently reduces the 3D feature map to 1D row vectors and enhances overall performance. First, we process the individual images in a given sequence using the MediaPipe framework and extract the regions of interest (ROIs). The cropped, processed image is then passed to Inception-v3 as the 2D feature extractor. Finally, a long short-term memory (LSTM) network serves as the temporal feature extractor and classifier. Our proposed method achieves an average accuracy of more than 89.7%. The experimental results also show that the proposed framework outperforms existing state-of-the-art methods.
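The data flow this abstract describes (MediaPipe ROI cropping, Inception-v3 per-frame features, LSTM over the sequence) can be sketched in terms of its shapes. The functions below are illustrative placeholders standing in for the real components, not the paper's implementation; the 2048-dimensional feature size is an assumption matching Inception-v3's pooled output:

```python
import numpy as np

FEATURE_DIM = 2048  # pooled output size of Inception-v3 (assumed here)


def crop_roi(frame: np.ndarray) -> np.ndarray:
    """Placeholder for MediaPipe hand-ROI extraction on one frame."""
    return frame  # a real system would return only the cropped hand region


def extract_2d_features(roi: np.ndarray) -> np.ndarray:
    """Placeholder for the Inception-v3 2D feature extractor: reduces
    one image to a single 1D row vector of length FEATURE_DIM."""
    rng = np.random.default_rng(roi.size)  # deterministic dummy features
    return rng.standard_normal(FEATURE_DIM).astype(np.float32)


def frames_to_sequence(frames: list) -> np.ndarray:
    """Reduce a gesture clip to a (T, FEATURE_DIM) array, one row
    vector per frame — the shape an LSTM classifier consumes."""
    return np.stack([extract_2d_features(crop_roi(f)) for f in frames])


# e.g. a 30-frame clip of 224x224 RGB images becomes a (30, 2048) sequence
clip = [np.zeros((224, 224, 3), dtype=np.uint8) for _ in range(30)]
seq = frames_to_sequence(clip)
```

The point of the reduction is visible in the shapes: a 30×224×224×3 video tensor shrinks to a 30×2048 sequence of row vectors before any temporal modeling happens, which is what keeps the LSTM stage cheap.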